
    Decoding Neural Activity to Assess Individual Latent State in Ecologically Valid Contexts

    There exist very few ways to isolate cognitive processes, historically defined via highly controlled laboratory studies, in more ecologically valid contexts. Specifically, it remains unclear to what extent patterns of neural activity observed under such constraints actually manifest outside the laboratory in a manner that can be used to make an accurate inference about the latent state, associated cognitive process, or proximal behavior of the individual. Improving our understanding of when and how specific patterns of neural activity manifest in ecologically valid scenarios would provide validation for laboratory-based approaches that study similar neural phenomena in isolation and meaningful insight into the latent states that occur during complex tasks. We argue that domain generalization methods from the brain-computer interface community have the potential to address this challenge. We previously used such an approach to decode phasic neural responses associated with visual target discrimination. Here, we extend that work to more tonic phenomena such as internal latent states. We use data from two highly controlled laboratory paradigms to train two separate domain-generalized models. We apply the trained models to an ecologically valid paradigm in which participants performed multiple, concurrent driving-related tasks. Using the pretrained models, we derive estimates of the underlying latent state and associated patterns of neural activity. Importantly, as the patterns of neural activity change along the axis defined by the original training data, we find changes in behavior and task performance consistent with the observations from the original, laboratory paradigms. We argue that these results lend ecological validity to those experimental designs and provide a methodology for understanding the relationship between observed neural activity and behavior during complex tasks.
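    The domain-generalization strategy described above can be illustrated with a minimal sketch: a classifier is trained on data pooled across several controlled "laboratory" domains, then applied to an unseen, more naturalistic domain, with the signed distance to the decision boundary serving as a continuous latent-state estimate. Everything below (the simulated feature distributions, the choice of a linear discriminant model) is an illustrative assumption, not the authors' actual pipeline.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Hypothetical "laboratory" domains: sessions whose feature distributions
    # are slightly shifted, each containing two latent states (e.g., low vs.
    # high workload).
    def make_domain(shift, n=200):
        X0 = rng.normal(0.0 + shift, 1.0, size=(n, 8))   # state A trials
        X1 = rng.normal(1.5 + shift, 1.0, size=(n, 8))   # state B trials
        return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

    # Pool several lab domains so the model learns domain-general structure
    # rather than session-specific idiosyncrasies.
    Xs, ys = zip(*[make_domain(s) for s in (-0.3, 0.0, 0.3)])
    model = LinearDiscriminantAnalysis().fit(np.vstack(Xs), np.concatenate(ys))

    # Apply to an unseen "ecologically valid" domain; the signed distance to
    # the decision boundary is one continuous latent-state estimate per trial.
    X_new, _ = make_domain(0.5)
    latent_axis = model.decision_function(X_new)
    ```

    In this framing, "changes along the axis defined by the original training data" correspond to movement of `latent_axis` values over time in the new domain.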

    During natural viewing, neural processing of visual targets continues throughout saccades

    Relatively little is known about visual processing during free-viewing visual search in realistic dynamic environments. Free-viewing is characterized by frequent saccades. During saccades, visual processing is thought to be suppressed, yet we know that the presaccadic visual content can modulate postsaccadic processing. To better understand these processes in a realistic setting, we study here saccades and neural responses elicited by the appearance of visual targets in a realistic virtual environment. While subjects were being driven through a 3D virtual town, they were asked to discriminate between targets that appear on the road. Using a system identification approach, we separated overlapping and correlated activity evoked by visual targets, saccades, and button presses. We found that the presence of a target enhances early occipital as well as late frontocentral saccade-related responses. The earlier potential, shortly after 125 ms post-saccade onset, was enhanced for targets that appeared in the peripheral vision as compared to the central vision, suggesting that fast peripheral processing initiated before saccade onset. The later potential, at 195 ms post-saccade onset, was strongly modulated by the visibility of the target. Together these results suggest that, during natural viewing, neural processing of the presaccadic visual stimulus continues throughout the saccade, apparently unencumbered by saccadic suppression.
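    The system-identification step — separating overlapping, correlated activity evoked by targets, saccades, and button presses — is commonly implemented as deconvolution by linear regression over a lagged (Toeplitz-style) design matrix. The sketch below uses simulated data and only two event types to show the idea; it is an assumption-laden illustration, not the authors' exact model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, n_lags = 3000, 30  # samples of "EEG", and length of each response

    # Hypothetical event trains: saccade onsets and target onsets whose
    # evoked responses overlap in time.
    saccades = np.zeros(T)
    saccades[rng.choice(T - n_lags, 40, replace=False)] = 1
    targets = np.zeros(T)
    targets[rng.choice(T - n_lags, 20, replace=False)] = 1

    def lagged(ev):
        """Design matrix with one column per post-event lag."""
        return np.column_stack([np.roll(ev, k) for k in range(n_lags)])

    X = np.hstack([lagged(saccades), lagged(targets)])

    # Simulated signal: each event type adds its own transient, plus noise.
    true_sac = np.hanning(n_lags)
    true_tgt = -0.5 * np.hanning(n_lags)
    eeg = X @ np.concatenate([true_sac, true_tgt]) + 0.1 * rng.normal(size=T)

    # One least-squares fit recovers both overlapping responses at once.
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    sac_resp, tgt_resp = beta[:n_lags], beta[n_lags:]
    ```

    Because both event regressors are fit jointly, the estimated saccade-locked and target-locked responses are unmixed even when the events occur close together, which is what makes the target-modulated saccade responses in the study interpretable.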

    The effect of target and non-target similarity on neural classification performance: a boost from confidence

    Brain-computer interface (BCI) technologies have proven effective in utilizing single-trial classification algorithms to detect target images in rapid serial visual presentation (RSVP) tasks. While many factors contribute to the accuracy of these algorithms, a critical aspect that is often overlooked concerns the feature similarity between target and non-target images. In most real-world environments there are likely to be many shared features between targets and non-targets, resulting in similar neural activity between the two classes. It is unknown how current neural-based target classification algorithms perform when qualitatively similar target and non-target images are presented. This study addresses this question by comparing behavioral and neural classification performance across two conditions: first, when targets were the only infrequent stimulus presented amongst frequent background distractors; and second, when targets were presented together with infrequent non-targets containing similar visual features to the targets. The resulting findings show that behavior is slower and less accurate when targets are presented together with similar non-targets; moreover, single-trial classification yielded high levels of misclassification when infrequent non-targets are included. Furthermore, we present an approach to mitigate the image misclassification. We use confidence measures to assess the quality of single-trial classification, and demonstrate that a system in which low-confidence trials are reclassified through a secondary process can result in improved performance.
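    The confidence-based reclassification scheme can be sketched as a two-stage pipeline: a primary classifier makes a decision and reports its confidence, and trials falling below a confidence threshold are routed to a secondary classifier. The features, models, and the 0.75 threshold below are all illustrative assumptions, not the study's actual system.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Hypothetical single-trial features for targets vs. similar non-targets.
    X, y = make_classification(n_samples=1000, n_informative=5, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    primary = LogisticRegression().fit(X_tr, y_tr)
    secondary = SVC().fit(X_tr, y_tr)  # stand-in for any secondary process

    proba = primary.predict_proba(X_te)
    confidence = proba.max(axis=1)     # certainty of the primary decision
    pred = proba.argmax(axis=1)

    # Route low-confidence trials through the secondary classifier.
    low = confidence < 0.75
    pred[low] = secondary.predict(X_te[low])
    ```

    The design choice here is that the expensive or slower secondary process only runs on the subset of trials where the primary classifier is uncertain, which is where target/non-target feature similarity causes most of the misclassifications.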

    Linear and Nonlinear Properties of Feature Selectivity in V4 Neurons

    Extrastriate area V4 is a critical cortical component of visual form processing in both humans and non-human primates. The tuning of V4 neurons shows an intermediate level of complexity that lies between the narrowband orientation and spatial frequency tuning of neurons in primary visual cortex and the highly complex object selectivity seen in inferotemporal neurons. Single-neuron recording studies in the monkey have demonstrated that V4 neurons can be highly selective for complex properties of visual stimuli, like contour curvature (Pasupathy and Connor, 1999) and the relative positions of object features in the receptive field (Pasupathy and Connor, 2001). However, the origins of complex feature selectivity and the specific circuits that transform the relatively simple wavelet-like encoding seen in primary visual cortex into the more complex selectivity profiles observed in V4 are not well understood. This is especially true in the case of selectivity for features occurring in natural scene stimuli. Little is known about how the selectivity of V4 neurons to isolated stimuli is altered when those stimuli appear in the context of a spectrally complex natural scene. Previous work using quasi-linear system identification methods has shown that some, but not all, of the responses of V4 neurons to natural stimuli can be accounted for by a neuron’s orientation and spatial frequency tuning (David et al., 2006). In this study we assessed the degree to which preferences for natural images can really be inferred from classical orientation and spatial frequency tuning functions. Using a psychophysically-inspired method we isolated and identified the specific visual driving features occurring in natural scene photographs that reliably elicit firing from single V4 neurons. We then compared the measured driving features to those predicted to drive each cell based on the linear spectral receptive field (SRF), which was estimated from responses to narrowband sinusoidal gratings.
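    The quasi-linear SRF prediction discussed above amounts to approximating a neuron's response to a natural image as the inner product of its spectral receptive field (tuning over orientation and spatial-frequency bins, estimated from grating responses) with the image's power spectrum binned the same way. The toy numbers below are simulated assumptions purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_ori, n_sf = 8, 6  # orientation and spatial-frequency bins

    # Hypothetical spectral receptive field: a Gaussian bump over (ori, sf),
    # as would be estimated from responses to narrowband sinusoidal gratings.
    ori, sf = np.meshgrid(np.arange(n_ori), np.arange(n_sf), indexing="ij")
    srf = np.exp(-((ori - 3) ** 2 + (sf - 2) ** 2) / 2.0)
    srf += 0.05 * rng.normal(size=srf.shape)  # measurement noise

    # A natural image's power spectrum binned over the same (ori, sf) grid
    # (simulated here with a roughly 1/f falloff in spatial frequency).
    nat_power = rng.random((n_ori, n_sf)) / (1.0 + sf)

    # Quasi-linear prediction: response ~ inner product of SRF and spectrum.
    pred_resp = float(np.sum(srf * nat_power))
    ```

    The study's question is precisely how far such a prediction goes: the features measured to actually drive V4 cells were compared against the features this kind of linear SRF model says should drive them.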